Sorry folks! I know I’m late with this material. But no worries! We’ll take a closer look at it on Wednesday!
Opinion dynamics
In this chapter we will implement an opinion dynamics agent-based model (ABM), which simulates how individuals in a population form, change, and influence opinions through local interactions. These models are increasingly used in social sciences, political science, and behavioral economics to explore how consensus, polarization, or fragmentation emerge from individual-level behaviors.
By the end of this chapter, you will have learned about the most basic opinion dynamics models and be prepared to build on them from there.
🎯 Learning Goals
Understand the basics of opinion dynamics models
Implement a simple opinion dynamics ABM in R
Opinion Formation Models
Opinion dynamics models describe how agents update their beliefs or opinions based on interactions with others. These models help us understand how consensus, polarization, or persistent disagreement can emerge in social systems. Each agent is typically characterized by an opinion value, and rules are defined for how opinions change over time based on social influence.
Modeling Opinions
Before we begin modeling how opinions change, we must first decide how to represent opinions in the model. The most basic approach assumes that each agent \(i\) holds a single opinion value \(o_i\). However, more complex models allow agents to hold multiple opinions, which can be useful when studying how different beliefs correlate or cluster into what are sometimes called cultural bundles (see e.g., DellaPosta, Shi, and Macy (2015)).
Opinions can be modeled in several ways, depending on the nature of the belief and the modeling goal:
Binary: Two possible states, e.g., agree/disagree, yes/no.
Categorical: Multiple unordered categories (e.g., political party affiliation).
Ordinal: Categories with a meaningful order but no fixed distance (e.g., strongly disagree to strongly agree).
Interval: Values on a continuous scale with fixed distances but no true zero (e.g., temperature-like scales).
Ratio: Like interval scales, but with a meaningful zero point (e.g., from -1 = disagree, 0 = neutral, to 1 = agree).
The choice of scale affects which mathematical tools and interaction rules are appropriate for the model. For example, averaging makes sense with interval or ratio scales but not with categorical or binary opinions.
In this chapter, we will implement a basic opinion dynamics model in R, where each agent holds a single opinion, \(o_i\). Opinions are represented on a continuous interval scale between 0 and 1, where \(o_i = 0\) means full disagreement and \(o_i = 1\) means full agreement. This simplification allows us to focus on the core dynamics of social influence while still capturing meaningful patterns such as convergence or polarization.
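Before looking at the dynamics, here is a minimal sketch of how such an opinion vector can be set up in R (the variable names are our own choice, not fixed by the chapter):

```r
# N agents, each with one opinion drawn uniformly from the interval [0, 1]
set.seed(42)
N <- 100
opinions <- runif(N, min = 0, max = 1)
summary(opinions)  # all values lie between 0 (disagree) and 1 (agree)
```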
We now turn to the mechanics of how these opinions evolve over time through positive and negative influence.
Positive Social Influence
Positive social influence plays a central role in shaping human opinions, beliefs, and behaviors. When individuals interact, they often adjust their views to become more similar to those around them. This phenomenon is known as positive social influence or simply as assimilation—a convergence of opinions over time.
Several mechanisms contribute to this effect:
Persuasion through argumentation: People may change their opinions when they encounter convincing reasoning or evidence (Myers 1978).
Desire for similarity: Social learning theory suggests that individuals seek alignment with their peers to maintain social harmony (Akers et al. 1979).
Uncertainty and informational cascades: When uncertain, individuals may follow the choices of others, assuming those choices reflect better information (Bikhchandani, Hirshleifer, and Welch 1992).
Social pressure and conformity: Individuals may conform to perceived social norms or group expectations (Festinger 1950; Homans 1951; Wood 2000).
In mathematical terms, we can express the process of opinion formation as a time-dependent update of an individual’s opinion. Let \(o_{i,t}\) be the opinion of agent \(i\) at time \(t\). The updated opinion at time \(t+1\) is then:
\[
o_{i,t+1} \;=\; o_{i,t} \;+\; \Delta o_{i,t}
\;=\; o_{i,t} \;+\; \mu \sum_{j} w_{ij}\bigl(o_{j,t} - o_{i,t}\bigr)
\] In this equation:
\(w_{ij}\) represents the influence weight that agent \(j\) has on agent \(i\).
In a simple unweighted network, \(w_{ij} = 1\) if agents \(i\) and \(j\) are connected, and \(0\) otherwise.
\(\mu\) is a learning rate or conformity parameter, which determines how quickly agents adjust their opinions.
\((o_{j,t} - o_{i,t})\) is the opinion difference between agent \(j\) and agent \(i\) at time \(t\).
This formulation captures positive social influence—agents move their opinions closer to those they interact with. The total change is a weighted average of the differences between the focal agent and its neighbors.
As a result of these interactions, agents’ opinions tend to converge over time. That is, their attitudes, beliefs, or behaviors become more similar, especially in tightly connected networks.
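A single synchronous update step under this rule can be sketched in R as follows. This is a sketch consistent with the update equation above; `W` is an \(N \times N\) adjacency (weight) matrix, and dividing by the number of neighbors — our assumption, matching the "weighted average" reading — keeps the step size bounded even in dense networks:

```r
# One update step under positive social influence.
update_positive <- function(opinions, W, mu = 0.4) {
  # diffs[i, j] = o_j - o_i, the opinion difference in the equation
  diffs <- outer(opinions, opinions, FUN = function(oi, oj) oj - oi)
  # divide by each agent's degree so the change is an average over neighbors
  deg <- pmax(rowSums(W), 1)
  opinions + mu * rowSums(W * diffs) / deg
}
```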
Below, you will see an example simulation illustrating this convergence effect under positive social influence with \(N=100\) in a complete network (the grey interior is simply all the connections in this network):
As you can see, we always end up with all agents holding the same opinion. But this is very different from what we observe in the real world.
“If people tend to become more alike in their beliefs, attitudes, and behavior when they interact, why do not all such differences eventually disappear?”
— Robert Axelrod (1997)
This question highlights a key paradox: If assimilation is so prevalent, why do we continue to observe persistent differences in opinion?
Bounded Confidence
One influential and widely studied opinion dynamics model is the bounded confidence model. It introduces a simple but powerful idea: agents are influenced only by others whose opinions are sufficiently similar—within a specific tolerance threshold, denoted by \(\epsilon\).
The bounded confidence model is grounded in several well-established psychological theories:
Confirmation bias: Individuals tend to seek and favor information that confirms their existing beliefs, while avoiding contradictory input (Nickerson 1998).
Social judgment theory: When evaluating others’ views, people categorize them into:
A latitude of acceptance: opinions close enough to one’s own to be considered reasonable.
A latitude of non-commitment: neutral or ambiguous opinions.
A latitude of rejection: opinions perceived as too far away to be considered seriously (Sherif and Hovland 1961).
The bounded confidence model primarily captures the latitude of acceptance, by allowing influence only from sufficiently similar opinions.
In the bounded confidence model, opinion updates take the form:
\[
o_{i,t+1} = o_{i,t} + \Delta o_{i,t} = o_{i,t} + \mu \sum_j f_w(o_{i,t}, o_{j,t}) \cdot w_{ij} \cdot (o_{j,t} - o_{i,t})
\]
where \(f_w(o_{i,t}, o_{j,t})\) determines whether \(j\) has any influence on \(i\) at time \(t\). A common form of \(f_w\) is:
\[
f_w(o_{i,t}, o_{j,t}) =
\begin{cases}
1, & \text{if } |o_{j,t} - o_{i,t}| < \epsilon \\
0, & \text{otherwise}
\end{cases}
\]
Here, \(\epsilon\) is the tolerance threshold—a key parameter in the bounded confidence model. It defines the maximum opinion difference that an agent is willing to accept when considering influence from another agent. In other words, agent \(i\) will only be influenced by agent \(j\) if their opinions are sufficiently similar, that is, within a distance of \(\epsilon\).1
This captures the idea that people are selectively open to influence, and tend to ignore or dismiss opinions that deviate too far from their own.
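Adding the tolerance threshold to the update rule is a small change in code. The sketch below follows the same assumptions as before (adjacency matrix `W`, learning rate `mu`, averaging over neighbors):

```r
# Bounded-confidence update: only neighbors within distance epsilon count.
update_bounded <- function(opinions, W, mu = 0.4, epsilon = 0.2) {
  diffs <- outer(opinions, opinions, FUN = function(oi, oj) oj - oi)
  fw <- (abs(diffs) < epsilon) * 1      # f_w = 1 inside the tolerance, 0 outside
  deg <- pmax(rowSums(W * fw), 1)       # average over accepted neighbors only
  opinions + mu * rowSums(W * fw * diffs) / deg
}
```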
The value of \(\epsilon\) critically determines the dynamics. Below is an interactive simulation of a bounded confidence model. You can experiment with the \(\epsilon\) parameter to observe how it influences the number and stability of opinion clusters.
High \(\epsilon\) (broad tolerance): leads to consensus—all opinions converge.
Low \(\epsilon\) (narrow tolerance): leads to clustering as subgroups of similar opinions form isolated clusters that no longer interact with others.
If \(\epsilon\) is small enough, multiple distinct opinion clusters can emerge, a phenomenon known as fragmentation. Unlike the extreme polarization seen in some models, bounded confidence typically leads to moderate separation rather than extremism.
Non-complete Networks
So far, we have assumed that agents interact within a complete network—where every agent is connected to every other agent. While useful for theoretical simplicity, this assumption is rarely realistic. In real social systems, people typically interact with only a limited number of others, often through sparse and structured networks.
Let’s now explore opinion dynamics in a preferential attachment network, where agents have only a few connections. This type of network mimics real-world social structures where a few individuals (hubs) are highly connected, while most have only a small number of ties.
The simulation below uses a preferential attachment (PA) network with \(m\) new edges per node:
The simulation above shows how sparse and structured connectivity can dramatically affect opinion dynamics. In contrast to complete networks, limited connectivity in PA networks can lead to the coexistence of isolated and incompatible opinion groups.
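A PA network of this kind can be generated with the igraph package; the sketch below builds one and converts it to the adjacency matrix used by the update rules above (parameter names follow igraph; the rest is our own illustration):

```r
library(igraph)

# Preferential-attachment network: each new node attaches m = 2 edges,
# preferring already well-connected nodes (hubs).
g <- sample_pa(n = 100, m = 2, directed = FALSE)
W <- as.matrix(as_adjacency_matrix(g))  # plug into update_* functions

degree_distribution(g)  # a few hubs, many low-degree nodes
```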
As illustrated in Figure 1, two distinct scenarios may emerge:
A) Isolated Extremes: When agents with extreme opinions are only weakly connected to the rest of the network, their views persist in isolation. These agents form stable, disconnected opinion clusters, immune to the influence of moderates.
B) Bridge Extremists: In some cases, an extremist agent may occupy a central or bridging position in the network. Paradoxically, such agents—despite holding extreme views—can block communication between more moderate groups. In doing so, they act as structural bottlenecks. Ironically, they become essential to maintaining overall opinion diversity by limiting consensus formation between otherwise connectable subgroups.
Figure 1: PA network with bounded confidence influence. A) with \(\epsilon = 0.3\) B) with \(\epsilon = 0.4\).
Negative Social Influence
While many interactions lead to assimilation, not all do. Individuals may also diverge from others, especially when they seek to assert individuality or reject disliked sources of influence. This can result in polarization, where opinions become more extreme or distinct over time.
Possible mechanisms for negative social influence include:
Repulsion from dissimilar or disliked others: According to balance theory and cognitive dissonance theory, people may adjust their opinions away from those they disapprove of (Heider 1946; Festinger 1957).
Striving for uniqueness: Individuals may resist conformity and adopt different views to maintain a sense of individuality (Snyder and Fromkin 1980). The optimal distinctiveness theory (Brewer 1991) suggests people balance the need to belong with the need to be unique.
These mechanisms can cause a society’s opinions to become increasingly diverse or polarized, rather than unified.
One variant of \(f_w\) is a linearly declining influence function:
\[
f_w(o_{i,t}, o_{j,t}) = 1 - 2 \cdot |o_{j,t} - o_{i,t}|
\]
In this case, influence decreases linearly as the opinion distance increases, as can be seen in Figure 2 (C). When two agents completely agree (\(|o_j - o_i| = 0\)), the influence is maximal (\(f_w = 1\)). When they completely disagree, the influence becomes negative (\(f_w = -1\)), potentially modeling repulsion or negative social influence.
Figure 2: Three common ways to model social influence using the function \(f_w(o_i, o_j)\), all plotted over the range of opinion differences \(|o_j - o_i|\) from 0 to 1. A) Constant Influence, B) Bounded Confidence, C) Linear Decline.
When \(f_w\) becomes negative for large opinion distances, the model begins to reflect negative social influence: instead of becoming more alike, agents actively diverge from others they strongly disagree with. This can lead to polarization, where groups move further apart over time.
This mechanism provides a formal way to capture observed behaviors such as identity-driven disagreement, or active rejection of opposing views—phenomena frequently seen in political discourse and social media dynamics.
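The linearly declining variant drops into the same update skeleton. One detail the text does not specify is what happens when repulsion pushes an opinion outside \([0, 1]\); clamping it back to the scale, as below, is our assumption:

```r
# Linearly declining influence: f_w = 1 - 2 * |o_j - o_i|,
# which turns negative (repulsion) for distances above 0.5.
update_linear <- function(opinions, W, mu = 0.4) {
  diffs <- outer(opinions, opinions, FUN = function(oi, oj) oj - oi)
  fw <- 1 - 2 * abs(diffs)
  deg <- pmax(rowSums(W), 1)
  new_op <- opinions + mu * rowSums(W * fw * diffs) / deg
  pmin(pmax(new_op, 0), 1)  # clamp opinions back onto the [0, 1] scale
}
```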
Let’s observe this dynamic again in a simulation. Instead of using a fully connected network, we will use a PA network again. Here, an \(m\) of 8 is sufficient to ensure full polarization.
Empirical Reality and More Complex Models
So far, we have explored three foundational models of opinion dynamics. Each of these models generates distinct patterns of opinion distribution—such as consensus, fragmentation, or polarization—depending on their assumptions and parameters. However, when we look at empirical data, we typically see patterns like those in Figure 3.
Figure 3: Typical distributions of opinions in a society. Empirically, both extreme assimilation and polarization are rare.
In real societies, we rarely observe complete consensus or total polarization. Instead, empirical distributions often show:
Clustering with overlap: Some agents agree on certain topics but not on others.
Partial polarization: Subgroups drift apart, but do not reach extreme opposition.
These patterns suggest that the simple mechanisms we’ve studied—while insightful—are insufficient to fully capture the complexity of real-world opinion dynamics.
Vertical Social Influence
Up until now, we’ve focused on horizontal social influence—interactions between agents or nodes of roughly equal status and power. This setup assumes a flat structure: everyone can influence everyone else more or less equally, depending on factors like similarity or proximity in a network.
However, many real-world social systems are hierarchical. Influence flows not just between peers, but also vertically—from more powerful actors to less powerful ones, and sometimes in reverse.
Consider the following examples:
Newspapers and their readers
Political parties and their supporters
Supermarkets and their customers
Each of these relationships reflects a power asymmetry. Elites—such as media organizations, political leaders, or large corporations—typically have greater reach, visibility, and network centrality. They are highly connected nodes with the capacity to shape the behavior and beliefs of many others.
Despite their influence, elites are not autonomous. They depend on the masses for votes, sales, viewership, or legitimacy. This leads to two possible directions of influence:
Top-down influence: Elites attempt to shape public opinion to serve their goals.
Bottom-up influence: Masses respond, resist, or influence elite behavior through collective feedback—e.g., voting, purchasing choices, or social mobilization.
⬇️ Top-Down Influence
In many contexts, elites can broadcast opinions, set agendas, and steer public discourse. This is especially visible in media and politics, where elite messages shape frames, narratives, and public priorities.
This type of vertical influence can be modeled as a directed network, where:
Influence flows only from elites to followers (opinions of elites are fixed).
Elites have greater connectivity.
Mass agents are more susceptible to elite messaging than vice versa (elites have a higher weight).
However, this is only one side of the equation.
⬆️ Bottom-Up Influence or The Strategic Aspect of Opinions
Up to this point, we have treated opinions as abstract properties of individuals—something people simply “have,” and which may evolve through social influence. However, in many real-world scenarios, opinions are not just personal beliefs—they are strategic choices.
Think of political parties, media outlets, or businesses: their positions are often shaped not only by internal convictions, but also by the desire to attract support, attention, or customers. A newspaper may adjust its tone to increase readership. A political party may shift its platform to win votes. In these contexts, expressing or adjusting one’s opinion can be a strategic move.
While it might be comforting to imagine that everyone always expresses their “true self,” the reality is more complex. Individuals and organizations may choose their position on an issue not based on what they believe most deeply, but on what is most beneficial in a given context.
🍦 The Ice Cream Vendor Game
To explore the strategic positioning of opinions, consider the classic ice cream vendor game.
Imagine a 100-meter-long beach. Every meter is equally populated, with two potential customers per meter. You are the first ice cream vendor (shown in red), and you choose where to place your booth for the day. After you choose, a second vendor (an AI-controlled competitor, shown in blue) selects their booth location.
Once both vendors are positioned, each customer will walk to the closest vendor to buy their ice cream. The beach is represented as a line segment, and the point halfway between the two vendors acts as a boundary—buyers to the left go to one vendor, buyers to the right to the other.
Your goal: maximize your share of the customers by choosing your location wisely.
If you experimented with the game, you may have discovered the best strategy: go to the center of the beach. Your AI opponent quickly does the same. As a result, both vendors converge at the middle, splitting the customer base evenly.
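The payoff logic of the game is simple enough to compute directly. The following sketch (function and variable names are our own) counts the customers a vendor wins for a given pair of booth positions:

```r
# Customers on a 1..100 meter beach walk to the nearest vendor;
# exact ties are split. Two customers per meter.
customers_won <- function(me, rival, beach = 1:100) {
  mine <- abs(beach - me) < abs(beach - rival)
  ties <- abs(beach - me) == abs(beach - rival)
  2 * (sum(mine) + sum(ties) / 2)
}

customers_won(50, 51)  # → 100: near-central rivals split the 200 customers evenly
```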
This pattern is visible in many real-world domains, as illustrated in Figure 4.
Figure 4: Strategic clustering in real-life domains such as shopping, nightlife or product placement.
The same logic applies to opinions and politics. According to the median voter theorem, political parties in a two-party system will move toward the center of the ideological spectrum, where the majority of voters are located. This maximizes their chances of winning elections. In Figure 5, we see how two political parties might adjust their positions to align with the median voter, even if it means sacrificing more radical platforms.
Figure 5: The hypothetical distribution of opinions among voters and parties and the optimal response.
This strategic positioning isn’t limited to two actors. With three or more parties, however, the equilibrium becomes unstable—parties may continuously reposition themselves in response to one another, as seen in game theory and balance theory.
However, even in multi-party systems, parties often cluster together rather than polarize, particularly when voters reward moderation. For example, in Figure 6, we see the results of an automatic text analysis of election manifestos by major German political parties (Olbrich and Banisch 2021). Despite their differences, their positions remain relatively close in a shared political space.
Figure 6: Automatic analyses of the election programs of all major parties in Germany, by Olbrich and Banisch (2021).
Interestingly, similar dynamics occur at the individual level. Think about fashion choices: within a social group, people often follow a common style—but not identically. Everyone wants to be similar enough to fit in, but unique enough to stand out.
This tension between conformity and distinctiveness can also be modeled using opinion dynamics. It introduces a layer of strategic identity expression: agents adopt positions based on their social context, not just personal beliefs.
Opinion Dynamics in R
Finally, the opinion dynamics model in R. This is not very complicated: we can reuse the basics from our diffusion model.
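A minimal end-to-end run might look like the sketch below: build a network, initialize opinions, and apply one bounded-confidence update per step. This reuses the structure of the diffusion model; all names and parameter values here are our own illustration:

```r
library(igraph)
set.seed(1)

# Network and initial opinions
g <- sample_pa(n = 100, m = 2, directed = FALSE)
W <- as.matrix(as_adjacency_matrix(g))
opinions <- runif(100)

mu <- 0.4       # learning rate
epsilon <- 0.2  # tolerance threshold

for (step in 1:50) {
  diffs  <- outer(opinions, opinions, FUN = function(oi, oj) oj - oi)
  accept <- (abs(diffs) < epsilon) * 1           # bounded confidence filter
  deg    <- pmax(rowSums(W * accept), 1)
  opinions <- opinions + mu * rowSums(W * accept * diffs) / deg
}

hist(opinions, breaks = 20, main = "Opinion distribution after 50 steps")
```

With a small \(\epsilon\) on a sparse PA network, the final histogram typically shows several separated opinion clusters rather than a single consensus peak.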
Akers, Ronald L., Marvin D. Krohn, Lonn Lanza-Kaduce, and Marcia Radosevich. 1979. “Social Learning and Deviant Behavior: A Specific Test of a General Theory.” American Sociological Review 44 (4): 636. https://doi.org/10.2307/2094592.
Bikhchandani, Sushil, David Hirshleifer, and Ivo Welch. 1992. “A Theory of Fads, Fashion, Custom, and Cultural Change as Informational Cascades.” Journal of Political Economy 100 (5): 992–1026. https://doi.org/10.1086/261849.
Brewer, Marilynn B. 1991. “The Social Self: On Being the Same and Different at the Same Time.” Personality and Social Psychology Bulletin 17 (5): 475–82. https://doi.org/10.1177/0146167291175001.
DellaPosta, Daniel, Yongren Shi, and Michael Macy. 2015. “Why Do Liberals Drink Lattes?” American Journal of Sociology 120 (5): 1473–1511. https://doi.org/10.1086/681254.
Nickerson, Raymond S. 1998. “Confirmation Bias: A Ubiquitous Phenomenon in Many Guises.” Review of General Psychology 2 (2): 175–220. https://doi.org/10.1037/1089-2680.2.2.175.
Olbrich, Eckehard, and Sven Banisch. 2021. “The Rise of Populism and the Reconfiguration of the German Political Space.” Frontiers in Big Data 4 (September): 731349. https://doi.org/10.3389/fdata.2021.731349.
Sherif, Muzafer, and Carl I. Hovland. 1961. Social Judgment: Assimilation and Contrast Effects in Communication and Attitude Change. Oxford, England: Yale University Press.
Note that if the tolerance threshold \(\epsilon\) is set greater than or equal to the maximum possible opinion difference—which, in the case of opinions ranging from 0 to 1, means \(\epsilon \geq 1\)—then all agents are always within each other’s tolerance range. In this case, the bounded confidence model reduces to a plain positive influence model.↩︎
Copyright
Copyright Sascha Grehl, 2025. All Rights Reserved
Source Code
--- title: "Experimental Sociology - Week 10" subtitle: "Opinion dynamics" date: "2025-06-11" editor: markdown: wrap: 72 bibliography: references.bib---```{r sim_helpers}#| echo: FALSElibrary(glue)sim_snippet <- function(prefix, input = NULL, canvasW = 400, canvasH = 400, tsW = 300, tsH = 200, tsStyle = "border:1px solid #ccc; margin:100px 0 100px 20px;") { inputs_html <- "" if (!is.null(input)) { # for each named control pieces <- lapply(names(input), function(id) { param <- input[[id]] label <- param$label %||% param[[1]] type <- param$type %||% "text" attrs <- param[names(param) != "label" & names(param) != "type"] attr_str <- "" if (length(attrs)) { attr_str <- paste0( " ", names(attrs), "=\"", attrs, "\"", collapse = "" ) } glue::glue( '<p>{label}<label for="{prefix}_{id}" id="{prefix}_{id}_label">#</label><br><input id="{prefix}_{id}" type="{type}"{attr_str}></p>' ) }) inputs_html <- paste(pieces, collapse = "\n") }cat(glue('<div class="simulation"><div class="results"><canvas id="{prefix}_canvas" width="{canvasW}" height="{canvasH}"></canvas><canvas id="{prefix}_time_series" width="{tsW}" height="{tsH}" style="{tsStyle}"></canvas></div><div class="settings"><p><button id="{prefix}_reset">Reset</button><button id="{prefix}_start">Run</button><button id="{prefix}_step">Step</button></p></div><div class="input">{inputs_html}</div></div>')) }```::: callout-importantSorry folks! I know I'm late with this material. But no worries! We'lltake a closer look at it on Wednesday!:::# Opinion dynamicsIn this chapter we will implement an **opinion dynamics** agent-basedmodel (ABM), which simulates how individuals in a population form,change, and influence opinions through local interactions. 
These modelsare increasingly used in social sciences, political science, andbehavioral economics to explore how consensus, polarization, orfragmentation emerge from individual-level behaviors.By the end of this chapter, you will have the learned about the mostbasic opinion dynamics models and be prepared to make it from there.## 🎯 Learning Goals1. Understand the basics of opinion dynamics models2. Implement a simple opinion dynamics ABM in R# Opinion Formation ModelsOpinion dynamics models describe how agents update their beliefs oropinions based on interactions with others. These models help usunderstand how consensus, polarization, or persistent disagreement canemerge in social systems. Each agent is typically characterized by anopinion value, and rules are defined for how opinions change over timebased on social influence.## Modeling OpinionsBefore we begin modeling how opinions change, we must first decide **howto represent opinions** in the model. The most basic approach assumesthat each agent $i$ holds a single opinion value $o_i$. However, morecomplex models allow agents to hold **multiple opinions**, which can beuseful when studying how different beliefs correlate or cluster intowhat are sometimes called **cultural bundles** (see e.g.,@dellaposta2015).Opinions can be modeled in several ways, depending on the nature of thebelief and the modeling goal:- **Binary**: Two possible states, e.g., *agree/disagree*, *yes/no*.- **Categorical**: Multiple unordered categories (e.g., political party affiliation).- **Ordinal**: Categories with a meaningful order but no fixed distance (e.g., *strongly disagree* to *strongly agree*).- **Interval**: Values on a continuous scale with fixed distances but no true zero (e.g., temperature-like scales).- **Ratio**: Like interval scales, but with a meaningful zero point (e.g., from -1 = disagree, 0 = neutral, to 1 = agree).The choice of scale affects which mathematical tools and interactionrules are appropriate for the model. 
For example, averaging makes sensewith interval or ratio scales but not with categorical or binaryopinions.In this chapter, **we will implement** a basic opinion dynamics model inR, where each agent holds **a single opinion**, $o_i$. Opinions arerepresented on a continuous interval scale between 0 and 1, where$o_i = 0$ means full disagreement and $o_i = 1$ means full agreement.This simplification allows us to focus on the core dynamics of socialinfluence while still capturing meaningful patterns such as convergenceor polarization.We now turn to the mechanics of how these opinions evolve over timethrough positive and negative influence.## Positive Social InfluencePositive social influence plays a central role in shaping humanopinions, beliefs, and behaviors. When individuals interact, they oftenadjust their views to become more similar to those around them. Thisphenomenon is known as **positive social influence** or simply as**assimilation**—a convergence of opinions over time.Several mechanisms contribute to this effect:- **Persuasion through argumentation**: People may change their opinions when they encounter convincing reasoning or evidence[@myers1978].- **Desire for similarity**: Social learning theory suggests that individuals seek alignment with their peers to maintain social harmony [@akers1979].- **Uncertainty and informational cascades**: When uncertain, individuals may follow the choices of others, assuming those choices reflect better information [@bikhchandani1992].- **Social pressure and conformity**: Individuals may conform to perceived social norms or group expectations [@festinger1950; @homans1951; @wood2000].In mathematical terms, we can express the process of opinion formationas a time-dependent update of an individual's opinion. Let $o_{i,t}$ bethe opinion of agent $i$ at time $t$. 
The updated opinion at time $t+1$is then:$$o_{i,t+1} \;=\; o_{i,t} \;+\; \Delta o_{i,t} \;=\; o_{i,t} \;+\; \mu \sum_{j} w_{ij}\bigl(o_{j,t} - o_{i,t}\bigr)$$ In this equation:- $w_{ij}$ represents the **influence weight** that agent $j$ has on agent $i$. - In a simple unweighted network, $w_{ij} = 1$ if agents $i$ and $j$ are connected, and $0$ otherwise.- $\mu$ is a **learning rate** or **conformity parameter**, which determines how quickly agents adjust their opinions.- $(o_{j,t} - o_{i,t})$ is the **opinion difference** between agent $j$ and agent $i$ at time $t$.This formulation captures **positive social influence**—agents movetheir opinions closer to those they interact with. The total change is aweighted average of the differences between the focal agent and itsneighbors.As a result of these interactions, agents' opinions tend to **converge**over time. That is, their attitudes, beliefs, or behaviors become moresimilar, especially in tightly connected networks.Below, you will see an example simulation illustrating this convergenceeffect under positive social influence with $N=100$ in a completenetwork (the grey interior is simply all the connections in thisnetwork):```{=html}<script src="https://code.jquery.com/jquery-3.7.1.slim.min.js"></script><script src="js/opinion_formation_sim.js"></script><script> $(function(){ // sim01: no slider → fixed epsilon = 0.1 (for example) new OpinionSim('sim01', { N: 100, network: 'ring', m: Infinity, MU: 0.4, MAX_STEPS: 50, epsilon: 1, // fixed seed: 123 }); });</script>``````{r}#| results: 'asis'#| echo: FALSEsim_snippet("sim01")```As you can see we always end up with all agents having the same opinion.But this is very different from what we observe in the real world.> “If people tend to become more alike in their beliefs, attitudes, and> behavior when they interact, why do not all such differences> eventually disappear?”\> — *Robert Axelrod (1997)*This question highlights a key paradox: If assimilation is so 
prevalent,why do we continue to observe persistent differences in opinion?### Bounded ConfidenceOne influential and widely studied opinion dynamics model is the**bounded confidence model**. It introduces a simple but powerful idea:agents **only are influenced only by others whose opinions aresufficiently similar**—within a specific *tolerance threshold*, denotedby $\epsilon$.The bounded confidence model is grounded in several well-establishedpsychological theories:- **Confirmation bias**: Individuals tend to seek and favor information that confirms their existing beliefs, while avoiding contradictory input [@nickerson1998].- **Social judgment theory**: When evaluating others' views, people categorize them into: - A **latitude of acceptance**: opinions close enough to one’s own to be considered reasonable. - A **latitude of non-commitment**: neutral or ambiguous opinions. - A **latitude of rejection**: opinions perceived as too far away to be considered seriously [@sherif1961].The bounded confidence model primarily captures the *latitude ofacceptance*, by allowing influence only from sufficiently similaropinions.In the bounded confidence model, opinion updates take the form:$$o_{i,t+1} = o_{i,t} + \Delta o_{i,t} = o_{i,t} + \mu \sum_j f_w(o_{i,t}, o_{j,t}) \cdot w_{ij} \cdot (o_{j,t} - o_{i,t})$$Where $f_w(o_{i,t}, o_{j,t})$ determines whether $j$ has any influenceon $i$ at time $t$. A common form of $f_w$ is:$$f_w(o_{i,t}, o_{j,t}) =\begin{cases}1, & \text{if } |o_{j,t} - o_{i,t}| < \epsilon \\0, & \text{otherwise}\end{cases}$$Here, $\epsilon$ is the **tolerance threshold**—a key parameter in thebounded confidence model. It defines the **maximum opinion difference**that an agent is willing to accept when considering influence fromanother agent. 
In other words, agent $i$ will only be influenced byagent $j$ if their opinions are **sufficiently similar**, that is,within a distance of $\epsilon$.[^1][^1]: Note that if the tolerance threshold $\epsilon$ is set greater than or equal to the maximum possible opinion difference—which, in the case of opinions ranging from 0 to 1, means $\epsilon \geq 1$—then all agents are always within each other’s tolerance range. In this case, the bounded confidence model reduces to a plain positive influence model.This captures the idea that people are **selectively open toinfluence**, and tend to **ignore or dismiss** opinions that deviate toofar from their own.The value of $\epsilon$ critically **determines the dynamics**. Below isan interactive simulation of a bounded confidence model. You canexperiment with the $\epsilon$ parameter to observe how it influencesthe number and stability of opinion clusters.```{=html}<script> $(function(){ new OpinionSim('sim02', { N: 100, network: 'ring', m: Infinity, MU: 0.4, MAX_STEPS: 50, seed: 123 }); });</script>``````{r}#| results: 'asis'#| echo: FALSEsim_snippet("sim02",input =list(epsilon =list(label="Epsilon: ", type="range", min="0", max="1", step="0.05", value="1")))```- **High** $\epsilon$ (broad tolerance): leads to **consensus**—all opinions converge.- **Low** $\epsilon$ (narrow tolerance): leads to **clustering** as subgroups of similar opinions form isolated clusters that no longer interact with others.If $\epsilon$ is small enough, **multiple distinct opinion clusters**can emerge, a phenomenon known as **fragmentation**. Unlike the extremepolarization seen in some models, bounded confidence typically leads to**moderate separation** rather than extremism.#### Non-complete NetworksSo far, we have assumed that agents interact within a **completenetwork**—where every agent is connected to every other agent. 
While useful for theoretical simplicity, this assumption is rarely
realistic. In real social systems, people typically interact with only a
**limited number of others**, often through sparse and structured
networks.

Let's now explore opinion dynamics in a **preferential attachment
network**, where agents have only a few connections. This type of
network mimics real-world social structures where a few individuals
(hubs) are highly connected, while most have only a small number of
ties.

The simulation below uses a preferential attachment (PA) network with
$m$ new edges per node:

```{=html}
<script>
  $(function(){
    new OpinionSim('sim03', { N: 100, network: 'pa', m: 1, MU: 0.4, MAX_STEPS: 50, seed: 123 });
  });
</script>
```

```{r}
#| results: 'asis'
#| echo: FALSE
sim_snippet("sim03",
  input = list(
    epsilon = list(label = "Epsilon: ", type = "range", min = "0", max = "1", step = "0.05", value = "1"),
    m = list(label = "M (# new edges): ", type = "range", min = "1", max = "8", step = "1", value = "1")
  )
)
```

The simulation above shows how sparse and structured connectivity can
dramatically affect opinion dynamics. In contrast to complete networks,
limited connectivity in PA networks can lead to the coexistence of
isolated and incompatible opinion groups.

As illustrated in @fig-opinion-block, two distinct scenarios may emerge:

- **A) Isolated Extremes:** When agents with extreme opinions are only
  weakly connected to the rest of the network, their views persist in
  isolation. These agents form stable, disconnected opinion clusters,
  immune to the influence of moderates.
- **B) Bridge Extremists:** In some cases, an extremist agent may occupy
  a central or bridging position in the network. Paradoxically, such
  agents—despite holding extreme views—can block communication between
  more moderate groups. In doing so, they act as structural bottlenecks.
  Ironically, they become essential to maintaining overall opinion
  diversity by limiting consensus formation between otherwise
  connectable subgroups.

{#fig-opinion-block}

## Negative Social Influence

While many interactions lead to assimilation, not all do. Individuals
may also **diverge** from others, especially when they seek to assert
individuality or reject disliked sources of influence. This can result
in **polarization**, where opinions become more extreme or distinct over
time.

Possible mechanisms for negative social influence include:

- **Repulsion from dissimilar or disliked others**: According to balance
  theory and cognitive dissonance theory, people may adjust their
  opinions away from those they disapprove of [@heider1946;
  @festinger1957].
- **Striving for uniqueness**: Individuals may resist conformity and
  adopt different views to maintain a sense of individuality
  [@snyder1980]. The optimal distinctiveness theory [@brewer1991]
  suggests people balance the need to belong with the need to be unique.

These mechanisms can cause a society's opinions to become increasingly
**diverse** or **polarized**, rather than unified.

One variant of $f_w$ is a **linearly declining influence function**:

$$f_w(o_{i,t}, o_{j,t}) = 1 - 2 \cdot |o_{j,t} - o_{i,t}|$$

In this case, influence **decreases linearly** as the opinion distance
increases, as can be seen in @fig-fw-comparison (C). When two agents
completely agree ($|o_j - o_i| = 0$), the influence is maximal
($f_w = 1$). When they completely disagree, the influence becomes
negative ($f_w = -1$), potentially modeling **repulsion** or **negative
social influence**.

```{r}
#| label: fig-fw-comparison
#| fig-cap: "Three common ways to model social influence using the function $f_w(o_i, o_j)$, all plotted over the range of opinion differences $|o_j - o_i|$ from 0 to 1. A) Constant Influence, B) Bounded Confidence, C) Linear Decline."
#| layout-ncol: 3
#| fig-align: center
#| code-fold: true
#| code-summary: Show R Code (Plot)
library(ggplot2)
library(patchwork)

x <- seq(0, 1, length.out = 200)

# 1. Constant influence (always 1)
df1 <- data.frame(x = x, y = rep(1, length(x)))

# 2. Bounded confidence (epsilon = 0.3)
epsilon <- 0.3
df2 <- data.frame(x = x, y = ifelse(x < epsilon, 1, 0))

# 3. Linearly declining influence
df3 <- data.frame(x = x, y = 1 - 2 * x)

p1 <- ggplot(df1, aes(x, y)) +
  geom_line() +
  labs(title = "Constant Influence", x = "|oᵢ - oⱼ|", y = "f_w") +
  ylim(-1, 1) +
  theme_minimal()

p2 <- ggplot(df2, aes(x, y)) +
  geom_line() +
  labs(title = "Bounded Confidence", x = "|oᵢ - oⱼ|", y = "f_w") +
  ylim(-1, 1) +
  theme_minimal()

p3 <- ggplot(df3, aes(x, y)) +
  geom_line() +
  labs(title = "Linear Decline", x = "|oᵢ - oⱼ|", y = "f_w") +
  ylim(-1, 1) +
  theme_minimal()

p1 + p2 + p3
```

When $f_w$ becomes negative for large opinion distances, the model
begins to reflect negative social influence: instead of becoming more
alike, agents actively diverge from others they strongly disagree with.
This can lead to polarization, where groups move further apart over
time.

This mechanism provides a formal way to capture observed behaviors such
as identity-driven disagreement or active rejection of opposing
views—phenomena frequently seen in political discourse and social media
dynamics.

Let's observe this dynamic again in a simulation. Instead of using a
fully connected network, we will use a PA network again.
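Before running it on a network, the repulsive side of this function can be seen in a tiny two-agent sketch in R. Everything here is an illustrative assumption, not prescribed by the chapter: uniform weights, influence averaged over the other agents, and opinions clamped back into $[0, 1]$.

```r
# Sketch: one update step with the linearly declining influence
# f_w = 1 - 2 * |o_j - o_i|, which turns negative for distances > 0.5.
ld_step <- function(o, mu = 0.4) {
  o_new <- vapply(seq_along(o), function(i) {
    diffs <- o[-i] - o[i]
    fw <- 1 - 2 * abs(diffs)    # negative beyond opinion distance 0.5
    o[i] + mu * mean(fw * diffs)
  }, numeric(1))
  pmin(pmax(o_new, 0), 1)       # keep opinions in [0, 1] (clamping assumed)
}

ld_step(c(0.4, 0.6))  # distance 0.2: f_w = 0.6 > 0, agents move together
ld_step(c(0.1, 0.9))  # distance 0.8: f_w = -0.6 < 0, agents push apart
```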
Here, an $m$ of 8 is sufficient to ensure full polarization.

```{=html}
<script>
  $(function(){
    new OpinionSim('sim04', { N: 100, network: 'pa', m: 1, MU: 0.4, MAX_STEPS: 50, seed: 123, influence: 'linear_decline' });
  });
</script>
```

```{r}
#| results: 'asis'
#| echo: FALSE
sim_snippet("sim04",
  input = list(
    m = list(label = "M (# edges): ", type = "range", min = "1", max = "8", step = "1", value = "1")
  )
)
```

## Empirical Reality and More Complex Models

So far, we have explored three foundational models of opinion dynamics.
Each of these models generates distinct patterns of opinion
distribution—such as **consensus**, **fragmentation**, or
**polarization**—depending on their assumptions and parameters. However,
when we look at **empirical data**, we typically see patterns like those
in @fig-rl-distribution.

{#fig-rl-distribution}

In real societies, we rarely observe **complete consensus** or **total
polarization**. Instead, empirical distributions often show:

- **Persistent diversity**: Multiple coexisting opinion groups.
- **Clustering with overlap**: Some agents agree on certain topics but
  not on others.
- **Partial polarization**: Subgroups drift apart, but do not reach
  extreme opposition.

These patterns suggest that the simple mechanisms we've studied—while
insightful—are **insufficient to fully capture the complexity of
real-world opinion dynamics**.

# Vertical Social Influence

Up until now, we've focused on **horizontal social
influence**—interactions between agents or nodes of roughly equal status
and power. This setup assumes a flat structure: everyone can influence
everyone else more or less equally, depending on factors like similarity
or proximity in a network.

However, many real-world social systems are **hierarchical**.
Influence flows not just between peers, but also **vertically**—from
more powerful actors to less powerful ones, and sometimes in reverse.

Consider the following examples:

- **Newspapers and their readers**
- **Political parties and their supporters**
- **Supermarkets and their customers**

Each of these relationships reflects a **power asymmetry**. Elites—such
as media organizations, political leaders, or large
corporations—typically have **greater reach, visibility, and network
centrality**. They are highly connected nodes with the capacity to shape
the behavior and beliefs of many others.

Despite their influence, elites are not autonomous. They depend on the
masses for **votes**, **sales**, **viewership**, or **legitimacy**. This
leads to **two possible directions of influence**:

- **Top-down influence**: Elites attempt to shape public opinion to
  serve their goals.
- **Bottom-up influence**: Masses respond, resist, or influence elite
  behavior through collective feedback—e.g., voting, purchasing choices,
  or social mobilization.

## ⬇️ Top-Down Influence

In many contexts, elites can **broadcast opinions**, set agendas, and
steer public discourse. This is especially visible in media and
politics, where elite messages shape frames, narratives, and public
priorities.

This type of vertical influence can be modeled as a **directed
network**, where:

- Influence flows only from elites to followers (opinions of elites are
  fixed).
- Elites have **greater connectivity**.
- Mass agents are **more susceptible** to elite messaging than vice
  versa (elites have a higher weight).

However, this is only one side of the equation.

## ⬆️ Bottom-Up Influence or The Strategic Aspect of Opinions

Up to this point, we have treated opinions as abstract properties of
individuals—something people simply "have," and which may evolve through
social influence.
However, in many real-world scenarios, **opinions are not just personal
beliefs—they are strategic choices**.

Think of political parties, media outlets, or businesses: their
positions are often shaped not only by internal convictions, but also by
the **desire to attract support, attention, or customers**. A newspaper
may adjust its tone to increase readership. A political party may shift
its platform to win votes. In these contexts, **expressing or adjusting
one's opinion can be a strategic move**.

While it might be comforting to imagine that everyone always expresses
their "true self," the reality is more complex. Individuals and
organizations may choose their position on an issue **not based on what
they believe most deeply, but on what is most beneficial in a given
context**.

## 🍦 The Ice Cream Vendor Game

To explore the **strategic positioning of opinions**, consider the
classic *ice cream vendor game*.

Imagine a 100-meter-long beach. Every meter is equally populated, with
**two potential customers per meter**. You are the first ice cream
vendor (shown in red), and you choose where to place your booth for the
day. After you choose, a second vendor (an AI-controlled competitor,
shown in blue) selects their booth location.

Once both vendors are positioned, each customer will walk to the
**closest vendor** to buy their ice cream. The beach is represented as a
line segment, and the point **halfway between the two vendors** acts as
a boundary—buyers to the left go to one vendor, buyers to the right to
the other.

Your goal: **maximize your share of the customers** by choosing your
location wisely.

```{=html}
<canvas id="beach" width="800" height="200" style="border: 1px solid #ccc; margin: 20px;"></canvas>
<script src="js/ice_cream_vendor_game.js"></script>
```

If you experimented with the game, you may have discovered the best
strategy: **go to the center of the beach**. Your AI opponent quickly
does the same.
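This outcome is easy to check numerically. The following R sketch uses a hypothetical helper `customers_v1` (not part of the game's JavaScript) and assumes the customers stand at the integer meter marks 1 to 100, two per mark, each walking to the nearest vendor.

```r
# Sketch: customers at meters 1..100 (2 per meter) buy from the nearest
# vendor; exact ties are split evenly. Returns vendor 1's customer count.
customers_v1 <- function(pos1, pos2) {
  spots <- 1:100
  d1 <- abs(spots - pos1)
  d2 <- abs(spots - pos2)
  2 * (sum(d1 < d2) + 0.5 * sum(d1 == d2))
}

customers_v1(25, 26)  # off-center: vendor 1 keeps only the left 25 meters -> 50
customers_v1(50, 51)  # at the center: half the beach -> 100
```

Standing off-center leaves your rival free to cut you off from the larger side of the beach, which is exactly why both vendors drift to the middle.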
As a result, both vendors **converge at the middle**, splitting the
customer base evenly.

This pattern is visible in many real-world domains, as illustrated in
@fig-rl-clustering.

{#fig-rl-clustering}

The same logic applies to opinions and politics. According to the
**median voter theorem**, political parties in a two-party system will
move toward the **center of the ideological spectrum**, where the
majority of voters are located. This maximizes their chances of winning
elections. In @fig-pol-clustering, we see how two political parties
might adjust their positions to align with the median voter, even if it
means sacrificing more radical platforms.

{#fig-pol-clustering}

This strategic positioning isn't limited to two actors. With **three or
more parties**, however, the equilibrium becomes **unstable**—parties
may continuously reposition themselves in response to one another, as
seen in game theory and balance theory.

However, even in multi-party systems, parties often **cluster together
rather than polarize**, particularly when **voters reward moderation**.
For example, in @fig-rile, we see the results of an automatic text
analysis of election manifestos by major German political parties
[@olbrich2021]. Despite their differences, their positions remain
relatively close in a shared political space.

{#fig-rile}

Interestingly, similar dynamics occur at the individual level. Think
about **fashion choices**: within a social group, people often follow a
common style—but not identically. Everyone wants to be **similar enough
to fit in**, but **unique enough to stand out**.

This tension between **conformity and distinctiveness** can also be
modeled using opinion dynamics. It introduces a layer of **strategic
identity expression**: agents adopt positions based on their **social
context**, not just personal beliefs.

# Diffusion in R

Finally, the opinion dynamics model in R.
This is not very complicated: we can reuse the basics from our diffusion
model 1. We will work through this in the in-class session.

# 🚀 Link to In-Class Material

Here is the [link for the in-class material](week10_inclass.html).